
Congratulations to the #AAAI2026 award winners

AIHub

A number of prestigious AAAI awards were presented during the official opening ceremony of the Fortieth AAAI Conference on Artificial Intelligence (AAAI 2026) in Singapore on Thursday 22 January. The AAAI Award for Artificial Intelligence for Humanity recognises the positive impacts of artificial intelligence in protecting, enhancing, and improving human life in meaningful ways with long-lived effects. The winner of this year's award is Shakir Mohamed. The Robert S. Engelmore Memorial Award recognises outstanding contributions to automated planning, machine learning and robotics, their application to real-world problems, and extensive service to the AI community. The annual AAAI/EAAI Outstanding Educator Award honours a person (or group of people) who has made major contributions to AI education that provide long-lasting benefits to the AI community and society as a whole.



Congratulations to the #AAAI2026 outstanding paper award winners

AIHub

We consider the problem of modifying a description logic concept in light of models represented as pointed interpretations. We call this setting model change, and distinguish three main kinds of changes: eviction, which consists of only removing models; reception, which incorporates models; and revision, which combines removal with incorporation of models in a single operation. We introduce a formal notion of revision and argue that it does not reduce to a simple combination of eviction and reception, contrary to intuition. We provide positive and negative results on the compatibility of eviction and reception for EL-bottom and ALC description logic concepts and on the compatibility of revision for ALC concepts.
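The three kinds of model change can be illustrated with a toy sketch that treats a concept's models as a finite set of abstract labels. This is only an intuition aid, not the paper's formal operator: in the actual setting, models are pointed interpretations of description logic concepts (typically infinitely many), and the paper's point is precisely that concept-level revision does not reduce to composing eviction and reception. All names below are hypothetical.

```python
def eviction(models: set, removed: set) -> set:
    """Eviction: only remove models from the current set."""
    return models - removed

def reception(models: set, incorporated: set) -> set:
    """Reception: only incorporate new models."""
    return models | incorporated

def revision(models: set, removed: set, incorporated: set) -> set:
    """Revision: removal and incorporation in a single operation.
    At the level of plain sets this coincides with eviction followed
    by reception, but for DL concepts the paper shows revision is NOT
    such a simple combination."""
    return (models - removed) | incorporated
```

On plain sets, `revision(M, R, I)` equals `reception(eviction(M, R), I)`; the interest of the paper is that this equivalence breaks down once the result must again be expressible as an EL⊥ or ALC concept.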


Analyzing Planner Design Trade-offs for MAPF under Realistic Simulation

Yan, Jingtian, Li, Zhifei, Kang, William, Smith, Stephen F., Li, Jiaoyang

arXiv.org Artificial Intelligence

Multi-Agent Path Finding (MAPF) algorithms are increasingly deployed in industrial warehouses and automated manufacturing facilities, where robots must operate reliably under real-world physical constraints. However, existing MAPF evaluation frameworks typically rely on simplified robot models, leaving a substantial gap between algorithmic benchmarks and practical performance. Recent frameworks such as SMART incorporate kinodynamic modeling and offer the MAPF community a platform for large-scale, realistic evaluation. Building on this capability, this work investigates how key planner design choices influence performance under realistic execution settings. We systematically study three fundamental factors: (1) the relationship between solution optimality and execution performance, (2) the sensitivity of system performance to inaccuracies in kinodynamic modeling, and (3) the interaction between model accuracy and plan optimality. Through an empirical study of these factors, we highlight open challenges and research directions to steer the community toward practical, real-world deployment.


Rethinking Multimodal Point Cloud Completion: A Completion-by-Correction Perspective

Luo, Wang, Wu, Di, Na, Hengyuan, Zhu, Yinlin, Hu, Miao, Quan, Guocong

arXiv.org Artificial Intelligence

Point cloud completion aims to reconstruct complete 3D shapes from partial observations, which is a challenging problem due to severe occlusions and missing geometry. Despite recent advances in multimodal techniques that leverage complementary RGB images to compensate for missing geometry, most methods still follow a Completion-by-Inpainting paradigm, synthesizing missing structures from fused latent features. We empirically show that this paradigm often results in structural inconsistencies and topological artifacts due to limited geometric and semantic constraints. To address this, we rethink the task and propose a more robust paradigm, termed Completion-by-Correction, which begins with a topologically complete shape prior generated by a pre-trained image-to-3D model and performs feature-space correction to align it with the partial observation. This paradigm shifts completion from unconstrained synthesis to guided refinement, enabling structurally consistent and observation-aligned reconstruction. Building upon this paradigm, we introduce PGNet, a multi-stage framework that conducts dual-feature encoding to ground the generative prior, synthesizes a coarse yet structurally aligned scaffold, and progressively refines geometric details via hierarchical correction. Experiments on the ShapeNetViPC dataset demonstrate the superiority of PGNet over state-of-the-art baselines in terms of average Chamfer Distance (-23.5%) and F-score (+7.1%).
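The abstract reports results in terms of average Chamfer Distance, a standard point cloud completion metric. As a reference, here is a minimal NumPy sketch of the symmetric (squared) Chamfer Distance between two point sets; the function name and the exact averaging convention are assumptions for illustration, since papers vary in whether they sum or average the two directional terms and whether distances are squared.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric squared Chamfer Distance between point sets
    p of shape (N, 3) and q of shape (M, 3): for each point,
    the squared distance to its nearest neighbour in the other
    set, averaged per set and summed over both directions."""
    # Pairwise squared distances via broadcasting: shape (N, M)
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

A lower value indicates a reconstruction closer to the ground-truth shape, which is why the reported -23.5% change in average Chamfer Distance is an improvement. The O(N·M) pairwise matrix is fine for small clouds; practical implementations use KD-trees or GPU kernels for dense point sets.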